Pixel Deconvolutional Networks
Authors
Abstract
Deconvolutional layers have been widely used in a variety of deep models for up-sampling, including encoder-decoder networks for semantic segmentation and deep generative models for unsupervised learning. A key limitation of deconvolutional operations is that they cause the so-called checkerboard problem: no direct relationship exists among adjacent pixels on the output feature map. To address this problem, we propose the pixel deconvolutional layer (PixelDCL), which establishes direct relationships among adjacent pixels on the up-sampled feature map. Our method is based on a fresh interpretation of the regular deconvolution operation. The resulting PixelDCL can replace any deconvolutional layer in a plug-and-play manner without compromising the fully trainable capabilities of the original models. The proposed PixelDCL may result in a slight decrease in efficiency, but this can be overcome by an implementation trick. Experimental results on semantic segmentation demonstrate that PixelDCL can consider spatial features such as edges and shapes and yields more accurate segmentation outputs than deconvolutional layers. When used in image generation tasks, PixelDCL largely overcomes the checkerboard problem suffered by regular deconvolution operations.
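The core idea can be sketched as follows: a 2x deconvolution is equivalent to computing four sub-feature maps and interleaving them periodically into the output; PixelDCL generates these sub-maps sequentially, each conditioned on the previously generated ones, so that adjacent output pixels become directly related. This is a minimal NumPy sketch under simplifying assumptions (2x upsampling, a single channel, random placeholder kernels instead of learned weights, and an illustrative dependency pattern), not the authors' implementation.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same' 2-D convolution with zero padding, for illustration only."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def pixel_dcl_2x(x, rng):
    """2x pixel deconvolution sketch: four sub-maps are generated
    sequentially, each conditioned on earlier ones, then interleaved."""
    k = lambda: rng.standard_normal((3, 3)) * 0.1  # placeholder kernels
    f1 = conv2d_same(x, k())                                   # from input only
    f2 = conv2d_same(x, k()) + conv2d_same(f1, k())            # conditioned on f1
    f3 = conv2d_same(x, k()) + conv2d_same(f1, k())            # conditioned on f1
    f4 = conv2d_same(x, k()) + conv2d_same(f2, k()) + conv2d_same(f3, k())
    h, w = x.shape
    out = np.zeros((2 * h, 2 * w))
    out[0::2, 0::2] = f1    # interleave the four sub-maps periodically
    out[0::2, 1::2] = f2
    out[1::2, 0::2] = f3
    out[1::2, 1::2] = f4
    return out

rng = np.random.default_rng(0)
y = pixel_dcl_2x(np.ones((4, 4)), rng)
print(y.shape)  # (8, 8)
```

A regular deconvolution would compute f1..f4 independently from the input, which is what produces checkerboard artifacts; the sequential conditioning above is the ingredient PixelDCL adds.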
Similar Papers
Deep Deconvolutional Networks for Scene Parsing
Scene parsing is an important and challenging problem in computer vision. It requires labeling each pixel in an image with the category it belongs to. Traditionally, it has been approached with hand-engineered features from color information in images. Recently convolutional neural networks (CNNs), which automatically learn hierarchies of features, have achieved record performance on the task. ...
Learning Common and Specific Features for RGB-D Semantic Segmentation with Deconvolutional Networks
In this paper, we tackle the problem of RGB-D semantic segmentation of indoor images. We take advantage of deconvolutional networks which can predict pixel-wise class labels, and develop a new structure for deconvolution of multiple modalities. We propose a novel feature transformation network to bridge the convolutional networks and deconvolutional networks. In the feature transformation netwo...
Street-View Change Detection with Deconvolutional Networks
We propose a system for performing structural change detection in street-view videos captured by a vehicle-mounted monocular camera over time. Our approach is motivated by the need for more frequent and efficient updates in the large-scale maps used in autonomous vehicle navigation. Our method chains a multi-sensor fusion SLAM and fast dense 3D reconstruction pipeline, which provide coarsely reg...
Estimating Full Regional Skeletal Muscle Fibre Curvature from B-Mode Ultrasound Images Using Convolutional-Deconvolutional Neural Networks
Direct measurement of strain within muscle is important for understanding muscle function in health and disease. Current technology (kinematics, dynamometry, electromyography) provides limited ability to measure strain within muscle. Regional fiber orientation and length are related with active/passive strain within muscle. Currently, ultrasound imaging provides the only non-invasive ...
Salient Deconvolutional Networks
Deconvolution is a popular method for visualizing deep convolutional neural networks; however, due to their heuristic nature, the meaning of deconvolutional visualizations is not entirely clear. In this paper, we introduce a family of reversed networks that generalizes and relates deconvolution, backpropagation and network saliency. We use this construction to thoroughly investigate and compare...
Journal: CoRR
Volume: abs/1705.06820
Pages: -
Publication year: 2017